How to enhance A/B test results with qualitative data

Posted on February 10, 2021
5 min read

Some of the most successful companies in the world embrace an experimentation culture, where new ideas are tested using scientific methods to determine whether they succeed. One of the most basic methodologies used for experimentation is A/B testing.

What is A/B testing?

A/B testing, sometimes known as split testing, compares two variations of something to see which performs better. To do this, A/B testing randomly splits users into two groups so that one group sees the ‘A’ variation and the other sees the ‘B’ variation.
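
To make the mechanics concrete, here’s a minimal sketch in Python of how such a random split might be implemented (the user IDs are hypothetical). Hashing each user ID keeps the assignment stable, so a returning user always sees the same variation.

```python
import hashlib

def assign_variation(user_id: str) -> str:
    """Deterministically assign a user to variation 'A' or 'B'."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"  # roughly a 50/50 split

# Hypothetical user IDs, purely for illustration
for user_id in ["user-1001", "user-1002", "user-1003", "user-1004"]:
    print(user_id, "sees variation", assign_variation(user_id))
```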

Key metrics are then measured to determine whether variation ‘A’ or ‘B’ is statistically better at improving business KPIs. Identifying and implementing the winning variation can lead to large uplifts in conversions, as well as continuous improvement in customer experience.
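
As an illustration of that comparison, the sketch below applies a standard two-proportion z-test to conversion rates. The visitor and conversion counts are made-up numbers for demonstration only, not data from a real test.

```python
from math import sqrt, erf

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rate between variation A and variation B."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of "no difference"
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: variation B converts slightly better than A
z, p = two_proportion_z_test(conversions_a=120, visitors_a=2400,
                             conversions_b=156, visitors_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real difference
```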

The biggest value of A/B testing is that it challenges assumptions and helps businesses make decisions based on data rather than on gut feelings. In particular, A/B testing is a useful methodology as it can be applied to almost anything, whether that's email subject lines, website information architecture, or even new processes. Tests can also be as small as a single copy change or can be as large as a website redesign.

However, even for the best-designed A/B tests, it’s nearly impossible to pinpoint the exact reason(s) why one variation succeeded over another. Although these experiments are hypothesis-driven, often with controlled variations between the designs, there may be alternate reasons behind the success of one variation over another.

The challenges with A/B testing and how to overcome them

Unfortunately, A/B testing is not without difficulties. Not knowing or understanding why one variation won over another is a common problem, and it becomes even more serious when testing innovative ideas or variations that look completely different from the test’s control. So what do we do when we come across a winning variation of a test?

Some may say you implement the winner and test the next hypothesis. However, this approach means experimentation comes to an end once all hypotheses have been tested. What’s important to remember is that although a variation may have won, the winning variation is not necessarily the best solution for that hypothesis. In other words, you may need to continue iterating on the winner to improve the customer experience further, even when your original hypothesis has already been proven.

Generally, the problem with A/B testing is that it only provides the quantitative side of the story: the numbers tell you which variation is better, but they leave you with many questions about why that variation is better.

Get more from your A/B test results with qualitative insights

Qualitative data is the non-numerical data you can collect from your users to help you understand the drivers behind the quantitative data. It can be gathered through methods such as user interviews, surveys, customer reviews, usability tests, and so on.

A/B tests on websites, for example, give you the quantitative data you need to decide which variation performs best, but it’s the qualitative data that helps you improve on these test winners.

One way to get the missing qualitative data is to ask users to go through the online journey of the winning variation and provide feedback on the winner that has been implemented. And it doesn’t have to be complicated.

3 simple steps for gathering qualitative data for A/B tests

To better understand why one variation outperformed another, you need to show both variations to the customer. This way you can get users to compare the two variations and hear what specifically makes one of them preferable.

Here are 3 easy steps for doing this:

  1. Get users to go through both variations of the design, with the same task and questions to avoid bias.
  2. Ensure the order of presentation of the designs is counterbalanced. Use a balanced comparison test so that the number of participants who see variation A first equals the number who see variation B first. This helps avoid primacy-recency bias, where participants prefer an option simply because they saw it first or last. (A minimal sketch of this assignment follows this list.)
  3. Finally, compare the responses you received on variation A with those you received on variation B to see whether any key themes, motivations, or barriers stand out. The differences spotted in the feedback will help you understand why one variation won over the other and will guide further iterations.
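
To illustrate step 2, here’s a minimal sketch of counterbalancing which variation each participant sees first. The participant IDs and the fixed random seed are hypothetical, used purely so the example is reproducible.

```python
import random

# Hypothetical participant list; any even-sized group works the same way
participants = [f"participant_{i:02d}" for i in range(1, 13)]

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(participants)

half = len(participants) // 2
order_assignments = (
    [(p, ("A", "B")) for p in participants[:half]] +   # see A first, then B
    [(p, ("B", "A")) for p in participants[half:]]     # see B first, then A
)

for participant, order in order_assignments:
    print(participant, "->", " then ".join(order))
```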

Pro tip: always be open to feedback, even if the qualitative data contradicts the quantitative data. If this is the case, use the qualitative data to help guide conversations and inspire new hypotheses. The contradictory findings will actually help you to understand whether there are further underlying issues in the customer journey.

For example, users may say they like having more product choices on the site, but the winner of the A/B test was the variation with a limited number of product choices. Although the qualitative and quantitative insights contradict each other, this raises questions about whether the A/B test was testing the right hypothesis in the first place. Without qualitative insights, it’s difficult to determine (with certainty) if your winning design is actually the best design.

In conclusion, always test, test, test. Test from a quantitative perspective by running A/B tests on your website, but also test from a qualitative perspective using methods like user testing. In combination, the insights and data gained from both methods help you build better iterations to test again. It is this test-and-learn methodology that leads businesses to success.

About the author(s)

Diane Leung — Cardif Pinnacle

Dr. Diane Leung is the Conversion Rate Optimisation Manager at Cardif Pinnacle, providing pet insurance to customers through Cardif Pinnacle’s own brand, Everypaw, and through its partners. With a Ph.D. in Psychology, Diane uses her extensive research skills and understanding of human decision making to drive experimentation, challenge mindsets, and facilitate continuous growth within businesses.
